
Adds support for parameterized tests #51

Merged (19 commits) on Nov 13, 2023

Conversation

KostkaBrukowa
Contributor

@KostkaBrukowa KostkaBrukowa commented Dec 27, 2022

Hey!
This PR makes parameterized tests visible and runnable in the summary window.
The main algorithm is:

  1. Run the treesitter queries on the test file.
  2. If the file does not contain any parameterized tests (it.each, test.each, etc.), do nothing special and just return the normal test positions.
  3. If the file does contain parameterized tests (see the sketch after this list):
  • run jest to discover every test in the file
  • find all of the jest test results that sit at a position returned by treesitter
  • add the new positions to the position returned by treesitter
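For illustration, a minimal Lua sketch of the merge in step 3 (not the PR's actual code: run_jest_json and the position fields are hypothetical stand-ins, though --json and --testLocationInResults are real jest CLI flags that make jest report each test's title and source location):

-- Hedged sketch: merge jest-discovered tests into treesitter positions.
-- run_jest_json and the position shape are hypothetical; the flags and
-- the testResults/assertionResults shape of jest's JSON output are real.
local function merge_parameterized(file, ts_positions)
  local report = run_jest_json({ "jest", "--json", "--testLocationInResults", file })
  for _, pos in ipairs(ts_positions) do
    if pos.is_parameterized then
      for _, assertion in ipairs(report.testResults[1].assertionResults) do
        -- jest locations are 1-based; treesitter ranges are 0-based.
        if assertion.location and assertion.location.line - 1 == pos.range[1] then
          -- One child position per concrete test generated by .each.
          table.insert(pos.children, { name = assertion.title, range = pos.range })
        end
      end
    end
  end
  return ts_positions
end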

Closes: #2

How to run:
Check out neotest-jest at my branch.
Demo:

Screen.Recording.2022-12-27.at.20.29.03.mov

TODO:

  • add support for test.each and it.each
  • add support for describe.each (maybe in a different PR)
  • add a configuration option to enable this behavior
  • fix running single parameterized tests (for now, if we run e.g. run_nearest, jest skips them; see the sketch after this list)
  • fix the race condition when running a test before its name is updated
  • some unit tests
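On the run_nearest item: one plausible cause (my assumption, not confirmed in this thread) is that a generated test name reaches jest's --testNamePattern unescaped. That flag takes a JavaScript regex, so names containing metacharacters match nothing and jest skips the test. A minimal Lua sketch of the escaping that would be needed:

-- Hypothetical sketch: escape JS regex metacharacters so a generated
-- test name is matched literally by jest's --testNamePattern.
local function escape_test_pattern(name)
  return (name:gsub("([%(%)%[%]%*%+%-%?%.%^%$%|\\])", "\\%1"))
end

-- escape_test_pattern("adds (1, 2)")  --> [[adds \(1, 2\)]]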

@rcarriga
Collaborator

Saw you were blocked by the neotest PR, so got it merged 😄 Very cool to see this!

@KostkaBrukowa
Contributor Author

I think the PR is ready for testing. If someone has the time to check out this branch and report any problems, I would be grateful.
Unfortunately, the jest command with the jsdom test environment can be really slow (in my bigger project it can take up to 7 seconds), so discovery is slow too. If anyone has ideas for speeding this up, it would be nice to use them here.

@KostkaBrukowa KostkaBrukowa marked this pull request as ready for review March 13, 2023 12:01
@KostkaBrukowa KostkaBrukowa changed the title [WIP] Adds support for parameterized tests Adds support for parameterized tests Mar 13, 2023
@primeapple

I'm going to test this.


@primeapple primeapple left a comment


Could you update your description? The referenced blocking PR on neotest has been merged, as @rcarriga stated.

adapters = {
  require('neotest-jest')({
    ...,
    jest_test_discovery = false,
  }),
},


Suggested change:
- jest_test_discovery = false,
+ jest_test_discovery = true,

Did you mean this?

Contributor Author


Actually, I meant to use false here: if someone updates the plugin without having disabled the discovery key, this option would run jest commands on all of the files in the project. I think this feature should be opt-in.
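For illustration, a hedged sketch of how such an opt-in flag might gate the expensive discovery (the function and field names here are hypothetical, not the PR's actual API):

-- Hypothetical sketch: only pay the jest-discovery cost when the user
-- has explicitly opted in with jest_test_discovery = true.
local function discover_positions(path)
  local positions = treesitter_positions(path) -- the normal, cheap pass
  if config.jest_test_discovery and has_each_calls(positions) then
    positions = merge_parameterized(path, positions) -- slow: runs jest
  end
  return positions
end

Defaulting the flag to false keeps existing setups unchanged after an automatic plugin update.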

Contributor


But wouldn't it be misleading if, e.g., the example for enabling the discovery showed the value as false?

lua/neotest-jest/empty.js (review thread, resolved)
@primeapple

I have tested this a bit and it works very well. I really like the idea of falling back to jest to discover the tests; it saves reimplementing all that logic.

Do you think this is mergeable?

@KostkaBrukowa
Contributor Author

I think it's mergeable after resolving the conflicts. I'll try to resolve them today.

@primeapple

I think I found some bugs. Consider the following test:

describe.each([null, undefined])('thing', (value) => {
    test('is null', () => {
        expect(value).toBeNull();
    });

    test('is undefined', () => {
        expect(value).toBeUndefined();
    });

    test('is not truthy', () => {
        expect(value).not.toBeTruthy();
    });
});
  1. Running the whole file marks the describe block as failed, the first test as failed, the second one as success (WRONG), and the last one as success.
  2. Running only the second test marks it as success (WRONG).
  3. I don't know if this is because of your changes, but see the following test:
describe('thing', () => {
    const value = null;

    test(`is null`, () => {
        expect(value).toBeNull();
    });

    test('is undefined', () => {
        expect(value).toBeUndefined();
    });

    test('is not truthy', () => {
        expect(value).not.toBeTruthy();
    });
});

Now try running only the first test (the one with backticks in the test name). It does not work: the test never gets a success/failure symbol, and instead the whole describe block is run.
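One plausible explanation for the backtick case (my assumption, not confirmed in this thread): if the adapter's treesitter query only captures plain (string) nodes as test names, a template-literal name yields no test position at all, so "nearest" resolves to the enclosing describe block. A sketch of a query that would cover both node types (template_string is the javascript grammar's node for backtick strings; #any-of? is a standard Neovim treesitter predicate):

-- Hypothetical illustration: capture both plain strings and template
-- strings as test names so test(`is null`, ...) gets its own position.
local query = [[
  (call_expression
    function: (identifier) @func_name (#any-of? @func_name "it" "test")
    arguments: (arguments
      [(string) (template_string)] @test.name)
  ) @test.definition
]]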

@KostkaBrukowa
Contributor Author

KostkaBrukowa commented Sep 27, 2023

About the first scenario: this is a limitation of the plugin (or arguably not a bug at all). If you have describe.each([null, undefined])('thing', ... and you don't interpolate the parameter into the name (e.g. 'thing %s'), all of the generated tests will have identical full names:
from the first run: "thing is null", "thing is undefined", "thing is not truthy"
and from the second run: "thing is null", "thing is undefined", "thing is not truthy"
so we cannot tell them apart in the output. Other IDEs have the same problem, just perhaps more visibly there. I have no idea how to fix it in the plugin. In the code it would be:

describe.each([null, undefined])('thing %s', (value) => {
    test('is null', () => {
        expect(value).toBeNull();
    });
});

About the second scenario: it's an existing problem on the master branch, see #46.

@primeapple

Then I'm fine with merging it :)

@primeapple

I have been using this for months now and it seems usable and stable. Any chance of merging and/or reviewing it, @haydenmeade?

Successfully merging this pull request may close these issues: "Doesn't support it.each tests".